

Magnitude 6.7 quake off Aomori triggers tsunami advisory

The Japan Times

Areas under a tsunami advisory are shown in yellow following a magnitude 6.7 earthquake on Friday. | JAPAN METEOROLOGICAL AGENCY

A magnitude 6.7 earthquake triggered a tsunami advisory for parts of Hokkaido as well as the coasts of Aomori, Iwate and Miyagi prefectures on Friday. The quake struck at 11:44 a.m., registering 4 on Japan's seismic intensity scale in some areas. Waves of up to 1 meter are possible in areas under the advisory, according to the Japan Meteorological Agency (JMA). A tsunami advisory, a level below a tsunami warning, urges those in the area to stay away from the ocean; evacuation is not required under an advisory.


What's behind a surge in bear attacks in Japan?

Al Jazeera

A deadly conflict between bears and humans is playing out across Japan, where authorities have deployed the military to protect locals and are using drone-based alert and surveillance systems to track the bears. Since April this year, at least 13 people have been killed and more than 100 have been injured in bear attacks in the country, according to an October report by the Ministry of the Environment. The ministry added that the death toll is the highest since Japan began keeping records of bear attacks in 2006. Hokkaido is home to brown bears, the larger of Japan's two bear species, while the more densely populated mainland is home to Asiatic black bears - also known as moon bears - which are smaller, weighing between 80 and 200 kg (176-440 pounds). Both species have been involved in incidents this year, and both are dangerous to humans to varying degrees.


Japan deploys army to fight bears

Popular Science

Japan is calling in its army to wrestle its ongoing bear problem. Last month, the country's Ministry of the Environment reported that Asian black bears (also known as moon bears) and brown bears have attacked over 100 people since March. With at least 10 fatalities among the tally, the government announced on November 5 that it is stepping up control efforts by deploying soldiers to Akita Prefecture on the island of Honshu in northern Japan. In a statement to reporters, Akita Governor Kenta Suzuki called the situation "desperate," noting that sightings and attacks are now occurring daily.


SoftBank chases actual revenue with OpenAI in corporate Japan

The Japan Times

SoftBank Group's Japanese mobile unit and OpenAI will launch AI services for local companies next year, seeking to realize real revenue in the face of growing concerns over sky-high valuations. SoftBank Corp. and OpenAI are still fine-tuning the products the two companies are co-developing for Japanese enterprises, said Junichi Miyakawa, president of the country's third-largest mobile carrier. Miyakawa said he has seen a test version of the services, which once launched would "completely change" the speed at which business is done. One feature is voice recognition that would allow users to rely less on manual typing, he said.


Adapting Rule Representation With Four-Parameter Beta Distribution for Learning Classifier Systems

Shiraishi, Hiroki, Hayamizu, Yohei, Hashiyama, Tomonori, Takadama, Keiki, Ishibuchi, Hisao, Nakata, Masaya

arXiv.org Artificial Intelligence

Rule representations significantly influence the search capabilities and decision boundaries within the search space of Learning Classifier Systems (LCSs), a family of rule-based machine learning systems that evolve interpretable models through evolutionary processes. However, choosing an appropriate rule representation for each problem is difficult, and some problems benefit from using different representations for different subspaces of the input space. Thus, an adaptive mechanism is needed to choose an appropriate rule representation for each rule in an LCS. This article introduces a flexible rule representation based on a four-parameter beta distribution and integrates it into a fuzzy-style LCS. The four-parameter beta distribution can form various function shapes, and this flexibility enables our LCS to automatically select appropriate representations for different subspaces. By controlling its four parameters, our rule representation can express both crisp and fuzzy decision boundaries in a variety of shapes, such as rectangles and bells, which standard representations such as trapezoidal ones cannot. Leveraging this flexibility, our LCS is designed to adapt the rule representation to each subspace. Moreover, our LCS incorporates a generalization bias favoring crisp rules where feasible, enhancing model interpretability without compromising accuracy. Experimental results on real-world classification tasks show that our LCS achieves significantly superior test accuracy and produces more compact rule sets. Our implementation is available at https://github.com/YNU-NakataLab/Beta4-UCS. An extended abstract related to this work is available at https://doi.org/10.36227/techrxiv.174900805.59801248/v1.
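To illustrate the flexibility the abstract describes, the sketch below (ours, not the Beta4-UCS implementation; the function name and normalization are hypothetical) shows how a four-parameter beta membership function morphs between a crisp, rectangle-like rule and a fuzzy, bell-shaped one as its shape parameters change:

```python
# Hypothetical sketch of a four-parameter beta membership function.
# Two shape parameters (alpha, beta >= 1) control the curve's form,
# and two support bounds (l, u) place it in the input space.

def beta4_membership(x, alpha, beta, l, u):
    """Membership degree in [0, 1] for input x.

    The beta density kernel is normalized by its peak value so the
    maximum membership is exactly 1. Assumes alpha, beta >= 1.
    """
    if not (l <= x <= u):
        return 0.0
    z = (x - l) / (u - l)                       # rescale x to [0, 1]
    f = z ** (alpha - 1) * (1 - z) ** (beta - 1)
    if alpha > 1 and beta > 1:
        # Interior mode: normalize by the kernel value at the mode.
        mode = (alpha - 1) / (alpha + beta - 2)
        peak = mode ** (alpha - 1) * (1 - mode) ** (beta - 1)
    else:
        peak = 1.0                              # uniform or monotone shapes
    return f / peak

# alpha = beta = 1 yields a crisp, rectangle-like rule:
print(beta4_membership(0.5, 1, 1, 0.0, 1.0))   # 1.0 inside the support
print(beta4_membership(1.5, 1, 1, 0.0, 1.0))   # 0.0 outside

# alpha = beta = 4 yields a fuzzy, bell-shaped rule that peaks at the
# center and falls off smoothly toward the support bounds:
print(beta4_membership(0.25, 4, 4, 0.0, 1.0))
```

With alpha = beta = 1 every point inside [l, u] has full membership (a crisp interval), while larger equal parameters produce a bell; unequal parameters skew the bell, which is the kind of shape variety the paper exploits per subspace.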


Probabilistic Functional Neural Networks

Wang, Haixu, Cao, Jiguo

arXiv.org Machine Learning

High-dimensional functional time series (HDFTS) are often characterized by nonlinear trends and high spatial dimensions, posing unique challenges for modeling and forecasting due to their nonlinearity, nonstationarity, and high dimensionality. We propose a novel probabilistic functional neural network (ProFnet) to address these challenges. ProFnet integrates the strengths of feedforward and deep neural networks with probabilistic modeling. The model generates probabilistic forecasts using Monte Carlo sampling, enabling the quantification of uncertainty in predictions. By capturing both temporal and spatial dependencies across multiple regions, ProFnet offers a scalable and unified solution for large datasets. An application to Japan's mortality rates demonstrates superior performance. The approach enhances predictive accuracy and provides interpretable uncertainty estimates, making it a valuable tool for forecasting complex HDFTS.
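The general idea of Monte Carlo probabilistic forecasting can be shown with a toy sketch. This is not the ProFnet architecture: the "model" here merely perturbs a point forecast with Gaussian noise, whereas ProFnet would draw samples from its learned probabilistic layers.

```python
# Minimal sketch of Monte Carlo probabilistic forecasting: draw many
# samples from a stochastic forecaster, then summarize them as a mean
# forecast plus an empirical prediction interval.
import random
import statistics

def mc_forecast(point_forecast, noise_sd, n_samples=1000, seed=0):
    """Return the Monte Carlo mean and an empirical 90% prediction interval."""
    rng = random.Random(seed)
    draws = [point_forecast + rng.gauss(0.0, noise_sd)
             for _ in range(n_samples)]
    draws.sort()
    lo = draws[int(0.05 * n_samples)]   # 5th percentile
    hi = draws[int(0.95 * n_samples)]   # 95th percentile
    return statistics.mean(draws), (lo, hi)

mean, (lo, hi) = mc_forecast(point_forecast=2.0, noise_sd=0.5)
print(f"forecast {mean:.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")
```

The interval width directly reflects the sampled uncertainty, which is what makes such forecasts interpretable for applications like mortality-rate projection.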


MMSearch: Benchmarking the Potential of Large Models as Multi-modal Search Engines

Jiang, Dongzhi, Zhang, Renrui, Guo, Ziyu, Wu, Yanmin, Lei, Jiayi, Qiu, Pengshuo, Lu, Pan, Chen, Zehui, Song, Guanglu, Gao, Peng, Liu, Yu, Li, Chunyuan, Li, Hongsheng

arXiv.org Artificial Intelligence

The advent of Large Language Models (LLMs) has paved the way for AI search engines, e.g., SearchGPT, showcasing a new paradigm in human-internet interaction. However, most current AI search engines are limited to text-only settings, neglecting multimodal user queries and the text-image interleaved nature of website information. Recently, Large Multimodal Models (LMMs) have made impressive strides. Yet, whether they can function as AI search engines remains under-explored, leaving the potential of LMMs in multimodal search an open question. To this end, we first design MMSearch-Engine, a pipeline that empowers any LMM with multimodal search capabilities. On top of this, we introduce MMSearch, a comprehensive evaluation benchmark to assess the multimodal search performance of LMMs. The curated dataset contains 300 manually collected instances spanning 14 subfields, with no overlap with current LMMs' training data, ensuring the correct answer can only be obtained through searching. Using MMSearch-Engine, the LMMs are evaluated on three individual tasks (requery, rerank, and summarization) and one challenging end-to-end task with a complete searching process. We conduct extensive experiments on closed-source and open-source LMMs. Among all tested models, GPT-4o with MMSearch-Engine achieves the best results, surpassing the commercial product Perplexity Pro on the end-to-end task and demonstrating the effectiveness of our proposed pipeline. We further present an error analysis showing that current LMMs still struggle to fully grasp multimodal search tasks, and conduct an ablation study indicating the potential of scaling test-time computation for AI search engines. We hope MMSearch provides unique insights to guide the future development of multimodal AI search engines. Project Page: https://mmsearch.github.io
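The three sub-tasks can be pictured as stages of a single pipeline. The skeleton below is our schematic illustration, not the MMSearch-Engine code; the lambdas are toy stand-ins for what would be LMM calls.

```python
# Schematic sketch of a requery -> rerank -> summarize search pipeline:
# requery rewrites the user query for a search engine, rerank picks the
# most relevant candidate page, and summarize produces the final answer.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SearchPipeline:
    requery: Callable[[str], str]           # user query -> search query
    rerank: Callable[[List[str]], str]      # candidate pages -> best page
    summarize: Callable[[str, str], str]    # (query, page) -> answer

    def run(self, user_query: str, candidates: List[str]) -> str:
        q = self.requery(user_query)
        page = self.rerank(candidates)
        return self.summarize(q, page)

# Toy stand-ins for the model calls at each stage:
pipe = SearchPipeline(
    requery=lambda q: q.lower(),
    rerank=lambda pages: pages[0],
    summarize=lambda q, page: f"answer({q!r}, {page!r})",
)
print(pipe.run("Who Won?", ["page-a", "page-b"]))
```

Evaluating each stage in isolation, as the benchmark does, amounts to swapping in a reference implementation for the other two stages and scoring only the one under test.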


Knowledge Graph-Enhanced Large Language Models via Path Selection

Liu, Haochen, Wang, Song, Zhu, Yaochen, Dong, Yushun, Li, Jundong

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have shown unprecedented performance in various real-world applications. However, they are known to generate factually inaccurate outputs, a.k.a. the hallucination problem. In recent years, incorporating external knowledge extracted from Knowledge Graphs (KGs) has become a promising strategy to improve the factual accuracy of LLM-generated outputs. Nevertheless, most existing approaches rely on the LLMs themselves to perform KG knowledge extraction, which is highly inflexible, as LLMs can only provide a binary judgment on whether a given piece of knowledge (e.g., a knowledge path in the KG) should be used. In addition, LLMs tend to pick only knowledge with a direct semantic relationship to the input text, while potentially useful knowledge with indirect semantics can be ignored. In this work, we propose KELP, a principled three-stage framework that addresses these problems. Specifically, KELP achieves finer-grained, flexible knowledge extraction by generating scores for knowledge paths against input texts via latent semantic matching. Meanwhile, knowledge paths with indirect semantic relationships to the input text can also be considered via trained encodings of the selected KG paths and the input text. Experiments on real-world datasets validate the effectiveness of KELP.
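The contrast with binary keep/drop judgments can be illustrated with a small sketch. This is not KELP's trained encoders: we score each KG path against the query with cosine similarity over toy embedding vectors (all vectors and path strings below are made up), then keep the top-k paths by score.

```python
# Hypothetical sketch of path selection by semantic scoring: rank KG
# paths by similarity to the query embedding instead of asking for a
# yes/no judgment on each path.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_paths(query_vec, path_vecs, k=2):
    """Return the k path names with the highest similarity scores."""
    scored = sorted(path_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Toy query and path embeddings (illustrative values only):
query = [1.0, 0.0, 0.5]
paths = {
    "(Tokyo, capital_of, Japan)":    [0.9, 0.1, 0.4],
    "(Japan, currency, yen)":        [0.2, 0.9, 0.1],
    "(Mt. Fuji, located_in, Japan)": [0.8, 0.0, 0.6],
}
print(select_paths(query, paths, k=2))
```

Because every path receives a continuous score, weakly but indirectly related paths can still be retained when k allows, which a binary filter would discard outright.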


AI helping Japan railway companies to combat problems with snow

The Japan Times

Japanese railway companies are turning to artificial intelligence to help tackle potential problems for their shinkansen bullet trains caused by accumulations of snow. West Japan Railway Co. is developing an AI system to gauge the amount of snow attached to Hokuriku Shinkansen trains that cut through Niigata, Toyama and Ishikawa prefectures adjacent to the Sea of Japan. The railway operator currently decides how many personnel to deploy for snow clearance a day beforehand, based on information from meteorological data providers and past experience, but these estimates are often inaccurate. The AI system will gather data from images of trains that have accumulated snow while traveling, study weather conditions and predict the number of personnel necessary for clearance work. Test operations have proved positive so far, and the system is set for full introduction next winter.


Job perks on rise in Japan as labor crunch reshapes how companies attract workers

The Japan Times

Misaki Harada wants to quit her job as a receptionist at a restaurant management company in Tokyo and move into marketing for an apparel-maker. But the 24-year-old said she wanted more than just a bigger paycheck. Her next employer would need to improve her quality of life. "If you ask me whether I prefer more money or more flexible working hours, I would choose more flexible working hours," she said. "I want to get married soon and start a family. I want to make sure I have time to take care of my children."